Troubleshooting
This section offers you solutions, workarounds, and explanations for issues related to CCC.
I’m facing an issue with my CCC container managed by Podman. The container stops unexpectedly after a few hours of inactivity.
The issue appears to be related to the container being killed due to inactivity in the user session. To address this, the following steps are recommended:
Remove the existing CCC container: podman rm -f ccc
Reset Podman settings: podman system reset
Reload the CCC container image into Podman, ensuring that it is properly set up in the repository: podman load -i ccc-4.2.0.tar
Prevent user session inactivity: loginctl enable-linger $UID
This command enables the linger feature, ensuring that the user session remains active even when no one is logged in. This helps prevent the CCC container from stopping unexpectedly due to session inactivity.
Restart the CCC container: podman-compose up -d
By resetting Podman, reloading the CCC container image, and enabling session linger, the container should no longer be killed due to inactivity, and secrets should be properly managed during container restarts.
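To confirm that linger is now enabled for your user, you can query systemd-logind; the output should report Linger=yes:
# Check whether linger is enabled for the current user
loginctl show-user $USER --property=Linger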
I am unable to access the data within the ccc-certs, pgdata, and ccc directories as a non-container user.
The ccc-certs directory includes CCC licenses and certificates that must be uploaded within the CCC application. The pgdata directory contains CCC data, while the ccc directory records the logs generated by the CCC application. Initially, all these directories are accessible to the user who launches the CCC container. However, after the CCC container is initialized, ownership of these directories is transferred to the user within the container. Consequently, non-container users cannot access the data stored in these directories. To gain access to the data in these directories, execute the following commands:
Podman
podman exec -it ccc bash
sudo chmod -R 777 /usr/safenet/ccc/server/standalone/log
sudo chmod -R 777 /usr/safenet/ccc/packages
sudo chmod -R 777 /usr/safenet/ccc/lunalogs
sudo chmod -R 777 /usr/safenet/ccc/user-certs
sudo chmod -R 777 /var/lib/postgresql
Kubernetes
kubectl exec -it <pod_name> -- bash
sudo chmod -R 777 /usr/safenet/ccc/server/standalone/log
sudo chmod -R 777 /usr/safenet/ccc/packages
sudo chmod -R 777 /usr/safenet/ccc/lunalogs
sudo chmod -R 777 /usr/safenet/ccc/user-certs
sudo chmod -R 777 /var/lib/postgresql
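If you want to confirm the ownership and permissions of these directories, you can inspect them from inside the same container session, for example:
# Inspect ownership and permissions of the mapped directories
ls -ld /usr/safenet/ccc/server/standalone/log /var/lib/postgresql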
I'm unable to initialize the CCC container using data from the old CCC container database.
To ensure persistence, the CCC database is stored on the host machine. To initialize the CCC container using data from the old CCC container, you need to make the following changes:
Podman
With Podman, the /var/lib/postgresql directory of the CCC container is mapped to <ccc_distribution_folder>/podman/pgdata on the host machine. This mapping can be modified in the podman-compose.yml file. When the CCC container is initialized using the command "podman-compose up", it reads the volume mappings specified in the podman-compose.yml file and begins persisting data accordingly. If you want to relocate the ccc_distribution package and initialize it again, you must also move the pgdata folder to the new path <ccc_distribution_folder>/podman/pgdata so that the old data generated by CCC remains accessible.
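For reference, the mapping typically appears as a volumes entry in podman-compose.yml, roughly as sketched below; the service name and exact paths are illustrative and may differ in your distribution:
services:
  ccc:                                # service name is illustrative
    volumes:
      - ./pgdata:/var/lib/postgresql  # <host path>:<container path>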
Kubernetes
With Kubernetes, the /var/lib/postgresql directory of the CCC container is mapped to /home/ccc/pgdata on the host machine. You can modify this setting in the postgres-data.yaml file, as required.
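As a rough illustration only (not the exact contents of your postgres-data.yaml), a hostPath-backed PersistentVolume pointing at that directory could look like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ccc-pgdata          # name is illustrative
spec:
  capacity:
    storage: 10Gi           # size is illustrative
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /home/ccc/pgdata  # host directory that persists the CCC database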
I cannot access CCC on Mozilla Firefox even after clicking the Accept the risk and continue button.
This issue is specific to Mozilla Firefox. You can either access CCC on Google Chrome or Microsoft Edge, or follow these steps to access CCC on Mozilla Firefox:
Click the Options tab from the menu on the right.
Click the Privacy and Security option from the navigation pane on the left and then scroll down to the Certificates section.
Click the View Certificates button and then click the Servers tab from the Security Manager window that appears on the screen.
Click the Add Exception button at the bottom.
Enter the CCC path in the Add Security Exception window that appears on the screen.
Click the Get Certificate button and then click the Confirm Security Exception button once the certificate has been retrieved. You should now be able to access CCC on Mozilla Firefox.
I'm encountering the following message while activating CCC root of trust: "System already activated".
To resolve this issue, you need to:
Activate the ROT again by entering the partition label and password.
Select the This device is running firmware 7.7 and above checkbox if you are using Luna HSM 7.7.0 or Luna HSM 7.7.1 with firmware 7.7.0 or 7.7.1.
Check the Remember credentials checkbox if you want CCC to cache your root of trust credentials.
Click the Activate button.
Why am I seeing an error under the Device Status column of the Monitoring and Reports tab after changing the CCC root of trust?
You are seeing this error because you haven't reconfigured the devices after changing the CCC root of trust (ROT). To reconfigure the devices:
Log in to CCC and navigate to Devices.
Select the device that is displaying the error under the Device Status column.
Click the Connection tab.
Press the Update Credentials button.
In the Update REST API Credentials window that appears, enter your username and password and then press the Update button. A pop-up message will appear on your screen, indicating that the credentials have been successfully changed.
Click the Authorization tab and then press the Re-authorize Device button.
In the Authorize SO Login window that appears, enter the HSM SO password to grant CCC the right to login to the device, and then press the Authorize button.
In a short while, the Device Status icon will turn green and you'll be able to perform the device monitoring tasks. If another device is showing the same error, repeat the above procedure for that device.
I'm encountering the following error while installing Podman in non-root user mode: Podman run error in non-root mode: "user namespaces are not enabled in /proc/sys/user/max_user_namespaces"
You are encountering this error because either user namespaces are not enabled or their limit is set too low, which prevents Podman from running in non-root mode. To resolve this issue, adjust the value of user.max_user_namespaces by running the following command with sudo privileges:
sudo sysctl user.max_user_namespaces=15000
Increasing the limit on user namespaces will allow Podman to run in non-root mode successfully without encountering the error.
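Note that a value set with sysctl in this way does not survive a reboot. To make it persistent, you can write it to a sysctl configuration file; the file name below is illustrative:
# Persist the setting across reboots
echo "user.max_user_namespaces=15000" | sudo tee /etc/sysctl.d/99-userns.conf
# Reload all sysctl configuration files
sudo sysctl --system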
I'm encountering the following error while loading the CCC image when running Podman in non-root user mode: "Potentially insufficient UIDs or GIDs available in user namespace"
You are encountering this error because there are potentially insufficient UIDs or GIDs available in the user namespace. To resolve this issue, run the following commands with sudo privileges:
sudo usermod --add-subgids 10000-75535 <USERNAME>
sudo usermod --add-subuids 10000-75535 <USERNAME>
podman system migrate
These steps aim to address the issue of potentially insufficient UIDs or GIDs available in the user namespace, allowing Podman to run successfully with the non-root user.
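To confirm that the subordinate ID ranges were assigned, you can check the host configuration files; replace <USERNAME> with your non-root user:
# Confirm the subordinate UID/GID ranges assigned to the user
grep <USERNAME> /etc/subuid /etc/subgid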
I'm encountering a yellow icon during the LDAP/LDAPs authentication process. Additionally, in the console.log file, I found the following error details:
Exception: KC-SERVICES0055: Error when authenticating to LDAP: LDAP response read timed out, timeout used: 60 ms.: javax.naming.NamingException: LDAP response read timed out, timeout used: 60 ms.
This issue occurs when the LDAP server does not respond within CCC's LDAP connection timeout. To resolve the problem and prevent further LDAP authentication errors, increase the timeout by following these steps:
Go to the machine where the CCC container is running.
Access the container by running the command "podman exec -it ccc bash".
Navigate to the directory /usr/safenet/ccc/server/bin.
Edit the standalone.conf file using the command "vi standalone.conf".
Append the following line and save the file: JAVA_OPTS="$JAVA_OPTS -Dcom.safenetinc.lunadirector.auth.ldapconnection.timeout=30000".
Navigate to the directory /usr/safenet/ccc/scripts.
Stop the server by executing "sh server.sh STOP".
Start the server again by executing "sh server.sh START".
End the container session by running the command "exit".
Access the GUI of CCC and log in.
Activate the ROT (if required).
Add the directory again.
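If you prefer to perform the container-side steps in a single pass, the following sketch is equivalent, assuming the container is named ccc and you are comfortable appending to standalone.conf non-interactively:
podman exec -it ccc bash
cd /usr/safenet/ccc/server/bin
# Append the LDAP connection timeout setting (30000 ms)
echo 'JAVA_OPTS="$JAVA_OPTS -Dcom.safenetinc.lunadirector.auth.ldapconnection.timeout=30000"' >> standalone.conf
cd /usr/safenet/ccc/scripts
sh server.sh STOP
sh server.sh START
exit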
What steps should I take to resolve a root-of-trust issue that has arisen after changing the HSM Admin password for the device used in CCC root-of-trust creation?
To overcome this issue, you need to execute one of the following procedures, depending on the method you’ve used for CCC installation:
If you’ve installed CCC using Podman
Remove the stored secrets using this command:
podman secret rm ccc_password
Update the secret file in the Podman directory with the correct password.
Load the updated secret file:
podman secret create ccc_password secretfile
Restart the container by running the following commands in the Podman directory:
podman-compose down
podman-compose up
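To confirm that the updated secret is in place, you can list the stored secrets; ccc_password should appear in the output:
# List Podman secrets
podman secret ls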
If you’ve installed CCC using Kubernetes
Delete the stored secrets using this command:
kubectl delete secrets ccc-password
Update the secret with the correct password using this command:
kubectl create secret generic ccc-password \
--from-literal=CCC_TRUSTSTORE_PASSWORD='password' \
--from-literal=CCC_KEYSTORE_PASSWORD='password' \
--from-literal=CCC_CREDENTIALSTORE_PASSWORD='password' \
--from-literal=HSM_PASSWORD1='password' \
--from-literal=CRYPTO_OFFICER_PASSWORD='password' \
--from-literal=HSM_PASSWORD2='password' \
--from-literal=CCC_ADMIN_PASSWORD='password' \
--from-literal=CA_CERTIFICATE_PASSWORD='password' \
--from-literal=CCC_DB_PASSWORD='password'
Restart the container by running the following commands in the Kubernetes directory:
kubectl delete -f deployment.yaml
kubectl delete -f config-map.yaml
sh launch.sh
If you’ve installed CCC using Helm
Delete the stored secrets with this command:
kubectl delete secrets ccc-password
Update the secret with the correct password using this command:
kubectl create secret generic ccc-password \
--from-literal=CCC_TRUSTSTORE_PASSWORD='password' \
--from-literal=CCC_KEYSTORE_PASSWORD='password' \
--from-literal=CCC_CREDENTIALSTORE_PASSWORD='password' \
--from-literal=HSM_PASSWORD1='password' \
--from-literal=CRYPTO_OFFICER_PASSWORD='password' \
--from-literal=HSM_PASSWORD2='password' \
--from-literal=CCC_ADMIN_PASSWORD='password' \
--from-literal=CA_CERTIFICATE_PASSWORD='password' \
--from-literal=CCC_DB_PASSWORD='password'
Restart the container by running the following command in the Helm directory:
helm uninstall ccc
helm install ccc .
How should I address a root-of-trust issue that arises after updating the Crypto Officer password for the HSM partition I used to establish CCC root-of-trust?
To resolve this issue, follow the steps for the similar issue: What steps should I take to resolve a root-of-trust issue that has arisen after changing the HSM Admin password for the device used in CCC root-of-trust creation?
How should I proceed when facing a root-of-trust issue on CCC following a change in the certificate of the HSM device used for CCC root-of-trust creation?
To address this problem, perform a container restart by executing the appropriate command based on the CCC installation method you've employed:
If you’ve installed CCC using Podman
podman-compose down
podman-compose up -d
If you’ve installed CCC using Kubernetes
kubectl delete -f deployment.yaml && kubectl delete -f config-map.yaml && sh launch.sh
If you’ve installed CCC using Helm
helm uninstall ccc && helm install ccc .
How can I enable detailed error logs during CCC installation?
To enable detailed error logs during CCC installation, you can follow these steps, depending on the method you’ve used for CCC installation:
If you’ve installed CCC using Podman
Navigate to the Podman directory.
Edit the ccc_config.env file and add this line:
DEBUG_MODE='Y'
Restart the container to see detailed logs:
podman-compose down
podman-compose up
If you’ve installed CCC using Kubernetes
Navigate to the Kubernetes directory.
Edit the config-map.yaml file and add this line:
DEBUG_MODE='Y'
Restart the container by running the following commands:
kubectl delete -f deployment.yaml
kubectl delete -f config-map.yaml
sh launch.sh
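For reference, the added entry belongs in the data section of the ConfigMap, roughly as sketched below; this is a hypothetical excerpt, and the exact key syntax and other entries in your config-map.yaml will differ:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ccc-config        # name is illustrative
data:
  DEBUG_MODE: "Y"         # enables detailed installation logs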
If you’ve installed CCC using Helm
This capability will be activated in an upcoming release.
How should I address the following error that I'm receiving when I try to use a newly created CCC service:
Error: A JNI error has occurred, please check your installation and try again. Exception in thread "main" java.lang.UnsupportedClassVersionError: com/safenetinc/client/LDClient has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0
You may receive this error if you are using a version of Java that is not compatible with the latest version of CCC. To ensure that CCC operates smoothly, it's necessary to have Java 11 installed on the computer where the ccc_client.jar file is stored.
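Class file version 55.0 corresponds to Java 11 and version 52.0 to Java 8, so the error indicates that the machine is currently running Java 8. You can verify the active version before retrying, assuming java is on the PATH:
# Verify the Java runtime version on the machine hosting ccc_client.jar
java -version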